Computer Science

What kind of errors are made by neural generation models and why?

20 July 2023
Duration: 00:59:24

Lecture presented by Claire Gardent, CNRS/LORIA, Nancy
Joint work with Juliette Faille, CNRS/LORIA and Lorraine University, Nancy, Albert Gatt (U. Utrecht), Quentin Brabant, Gwénolé Lecorvé and Lina Rojas-Barahona (Orange Lannion)


In this talk, I will present our work on assessing, analysing and explaining the output of text generation models that are grounded in Knowledge Graphs (KG).

Focusing on KG-to-Text encoder-decoder models, i.e., generation models which aim to verbalise the content of a Knowledge Graph, I will discuss missing information, i.e., information that is present in the input but absent from the output. I will introduce a novel evaluation metric for assessing the extent to which generation models omit input information and show that, while this metric correlates with human scores, the correlation varies with the specifics of the human evaluation setup. This suggests that an automatic metric may be more reliable than human evaluation measures, as it is less subjective and more narrowly focused on whether the input is correctly verbalised. I will then go on to demonstrate, using both a parametric and a non-parametric probe, that omissions are already "visible" in the encoder representations, i.e., can be traced back to the encoder.
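
The abstract does not spell out how the omission metric works, but the underlying idea can be illustrated with a minimal sketch: given the input KG triples and the generated text, measure what fraction of input entities can be matched in the output. Everything below (function names, the normalisation scheme, the example triples) is illustrative, not the metric actually presented in the talk.

```python
# Hypothetical sketch of an omission/coverage check for KG-to-Text output.
# Real metrics are typically more sophisticated (e.g., fuzzy matching or
# learned scoring); this only illustrates the general idea.

def normalise(s: str) -> str:
    """Lowercase and replace underscores, a common KG label convention."""
    return s.lower().replace("_", " ").strip()

def input_coverage(triples: list[tuple[str, str, str]], text: str) -> float:
    """Fraction of input entities (subjects and objects) mentioned in the
    generated text. 1.0 means nothing was omitted; lower values signal
    missing information."""
    mentions = normalise(text)
    entities = {normalise(e) for s, _, o in triples for e in (s, o)}
    if not entities:
        return 1.0
    found = sum(1 for e in entities if e in mentions)
    return found / len(entities)

triples = [("Alan_Turing", "birthPlace", "London"),
           ("Alan_Turing", "field", "Computer_Science")]
# "Computer Science" is omitted, so coverage is 2/3.
print(input_coverage(triples, "Alan Turing was born in London."))
```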
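The parametric probe can likewise be sketched in general terms: train a simple classifier on encoder representations to predict whether an input entity will be omitted from the output; above-chance accuracy would indicate that omissions are already encoded before decoding. The data, shapes and classifier below are stand-ins, not the probing setup used in the work presented.

```python
# Hypothetical sketch of a parametric probe over encoder representations.
# In practice X would hold (mean-pooled) encoder vectors for input entities
# and y whether each entity was omitted; random stand-ins are used here
# just to make the script runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))    # stand-in encoder representations
y = rng.integers(0, 2, size=200)   # stand-in omission labels (1 = omitted)

probe = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
# With real representations, accuracy well above chance on held-out data
# would suggest omissions are "visible" in the encoder.
print("probe accuracy:", probe.score(X[150:], y[150:]))
```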

In the second part of the talk, I will discuss conversational question generation and show that grounding dialog in knowledge allows for a detailed analysis of the model behaviour in terms of well-formedness, relevance, semantic adequacy and dialog coherence.

Keywords: AI, conversational question generation, KG, knowledge graphs
